Machine Learning: Proceedings of the Thirteenth International Conference, 1996
Bias Plus Variance Decomposition for Zero-One Loss Functions

Authors

  • Ron Kohavi
  • David H. Wolpert
Abstract

We present a bias-variance decomposition of expected misclassification rate, the most commonly used loss function in supervised classification learning. The bias-variance decomposition for quadratic loss functions is well known and serves as an important tool for analyzing learning algorithms, yet no decomposition was offered for the more commonly used zero-one (misclassification) loss functions until the recent work of Kong & Dietterich (1995) and Breiman (1996). Their decomposition suffers from some major shortcomings, though (e.g., potentially negative variance), which our decomposition avoids. We show that, in practice, the naive frequency-based estimation of the decomposition terms is by itself biased and show how to correct for this bias. We illustrate the decomposition on various algorithms and datasets from the UCI repository.
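The decomposition described in the abstract can be sketched numerically. The code below is an illustrative sketch, not the paper's implementation: it assumes, at a single test point x, a hypothetical target distribution `p_f` over class labels and a hypothetical distribution `p_h` of the learner's prediction across training sets, and computes the noise, squared-bias, and variance terms of the Kohavi-Wolpert zero-one decomposition, which sum to the expected misclassification rate at x.

```python
# Sketch of the Kohavi-Wolpert (1996) bias-variance decomposition for
# zero-one loss at a single test point x. The distributions below are
# made-up illustrations, not results from the paper.

def zero_one_decomposition(p_f, p_h):
    """Return (noise, bias_sq, variance) at a single test point.

    p_f[y]: target distribution P(Y_F = y | x)
    p_h[y]: distribution of the learner's prediction P(Y_H = y | x)
            over training sets
    """
    noise = 0.5 * (1.0 - sum(f * f for f in p_f))               # sigma_x^2
    bias_sq = 0.5 * sum((f - h) ** 2 for f, h in zip(p_f, p_h))  # bias_x^2
    variance = 0.5 * (1.0 - sum(h * h for h in p_h))            # variance_x
    return noise, bias_sq, variance

# Hypothetical three-class example.
p_f = [0.7, 0.2, 0.1]   # target distribution at x
p_h = [0.5, 0.4, 0.1]   # learner's prediction distribution at x

noise, bias_sq, variance = zero_one_decomposition(p_f, p_h)

# The three terms sum exactly to the expected zero-one loss at x:
#   E[loss] = 1 - sum_y P(Y_F = y | x) * P(Y_H = y | x)
expected_loss = 1.0 - sum(f * h for f, h in zip(p_f, p_h))
assert abs((noise + bias_sq + variance) - expected_loss) < 1e-12
```

Note that, unlike the Kong & Dietterich and Breiman decompositions criticized in the abstract, all three terms here are non-negative by construction.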


Similar articles

Bias Plus Variance Decomposition for Zero-One Loss Functions

We present a bias-variance decomposition of expected misclassification rate, the most commonly used loss function in supervised classification learning. The bias-variance decomposition for quadratic loss functions is well known and serves as an important tool for analyzing learning algorithms, yet no decomposition was offered for the more commonly used zero-one (misclassification) loss functions until t...


A Unified Bias-Variance Decomposition for Zero-One and Squared Loss

The bias-variance decomposition is a very useful and widely-used tool for understanding machine-learning algorithms. It was originally developed for squared loss. In recent years, several authors have proposed decompositions for zero-one loss, but each has significant shortcomings. In particular, all of these decompositions have only an intuitive relationship to the original squared-loss one. I...


Appendix : Machine Learning Bias Versus Statistical Bias

... This high variance may help to explain why there is selection pressure for weak (machine learning) bias when the (machine learning) bias correctness is low. The reason that statisticians are interested in (statistical) bias and variance is that squared error is equal to the sum of squared (statistical) bias and variance. Therefore minimal (statistical) bias and minimal variance ...
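The classical statistical identity this snippet refers to can be verified numerically. The sketch below uses a made-up deliberately biased estimator of a fixed target and checks that mean squared error equals squared bias plus variance (with bias and variance estimated from the same sample, the identity holds exactly up to floating-point error).

```python
# Numerical check of the identity E[(theta_hat - theta)^2] = bias^2 + variance.
# The estimator here is a made-up illustration: an unbiased Gaussian estimate
# shifted by a constant 0.5, giving a known bias of about 0.5.
import random

random.seed(0)
theta = 3.0
estimates = [theta + 0.5 + random.gauss(0.0, 1.0) for _ in range(200000)]

n = len(estimates)
mean_est = sum(estimates) / n
bias = mean_est - theta                                    # statistical bias
variance = sum((e - mean_est) ** 2 for e in estimates) / n  # variance
mse = sum((e - theta) ** 2 for e in estimates) / n          # squared error

assert abs(mse - (bias ** 2 + variance)) < 1e-6
```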




Journal title:

Volume   Issue

Pages  -

Publication date: 1996